This paper contributes improvements in both the effectiveness and efficiency of Matrix Factorization (MF) methods for implicit feedback. We highlight two critical issues in existing work. First, due to the large space of unobserved feedback, most existing methods resort to assigning a uniform weight to the missing data to reduce computational complexity. However, such a uniform assumption is invalid in real-world settings. Second, most methods are designed for an offline setting and fail to keep up with the dynamic nature of online data. We address these two issues in learning MF models from implicit feedback. We first propose to weight the missing data based on item popularity, which is more effective and flexible than the uniform-weight assumption. However, such non-uniform weighting poses an efficiency challenge in learning the model. To address this, we design a new learning algorithm based on the element-wise Alternating Least Squares (eALS) technique for efficiently optimizing an MF model with variably weighted missing data. We then exploit this efficiency to devise an incremental update strategy that instantly refreshes an MF model given new feedback. Through comprehensive experiments on two public datasets under both offline and online protocols, we show that our eALS method consistently outperforms state-of-the-art implicit MF methods. Our implementation is available at https://github.com/hexiangnan/sigir16-eals.
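As a minimal sketch of the popularity-based weighting idea (not the paper's exact configuration; the total weight budget `w0` and damping exponent `alpha` are hypothetical hyperparameter choices), each item's missing entries could receive a confidence weight proportional to its smoothed popularity:

```python
import numpy as np

def popularity_weights(interaction_counts, w0=512.0, alpha=0.75):
    """Weight each item's missing entries by its (damped) popularity.

    interaction_counts: per-item counts of observed feedback.
    w0: hypothetical total weight budget spread over all items.
    alpha: hypothetical damping exponent; alpha < 1 shrinks the
           advantage of very popular (head) items.
    """
    f = np.asarray(interaction_counts, dtype=float) ** alpha
    # Normalize so the weights sum to w0; more popular items get
    # larger weights on their missing (unobserved) entries.
    return w0 * f / f.sum()

# Example: three items with decreasing popularity.
w = popularity_weights([100, 10, 1])
```

Compared with a uniform weight, this concentrates the "negative" signal of missing data on popular items, for which a missing interaction is more informative.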